legal liability
How to rein in the AI threat? Let the lawyers loose
Fifty-five percent of Americans are worried about the threat AI poses to the future of humanity, according to a recent Monmouth University poll. More than 1,000 AI experts and funders, including Elon Musk and Steve Wozniak, signed a letter calling for a six-month pause in training new AI models. In turn, Time published an article calling for a permanent global ban. The problem with these proposals, however, is that they require coordination among numerous stakeholders across a wide variety of companies and governments. Let me share a more modest proposal that is much more in line with our existing methods of reining in potentially threatening developments: legal liability.
What We Learned Auditing Sophisticated AI for Bias
A recently passed law in New York City requires audits for bias in AI-based hiring systems. AI systems fail frequently, and bias is often to blame. A recent sampling of headlines features sociological bias in generated images, a chatbot, and a virtual rapper. These examples of denigration and stereotyping are troubling and harmful, but what happens when the same types of systems are used in more sensitive applications? Leading scientific publications assert that algorithms used in U.S. healthcare diverted care away from millions of Black people.
- North America > United States > New York (0.25)
- North America > United States > District of Columbia > Washington (0.04)
- Europe > Netherlands (0.04)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Banking & Finance (0.70)
- Government > Regional Government > North America Government > United States Government (0.69)
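Bias audits of the kind the New York City law requires center on a simple statistic: each demographic group's selection rate divided by the most-favored group's selection rate, known as the impact ratio. A minimal sketch of that calculation, using entirely hypothetical candidate data:

```python
# Minimal sketch of the impact-ratio calculation used in bias audits of
# automated hiring tools. All candidate data below is hypothetical.

def selection_rates(outcomes):
    """Map each group to its selection rate (selected / screened)."""
    return {group: selected / total
            for group, (selected, total) in outcomes.items()}

def impact_ratios(outcomes):
    """Each group's selection rate divided by the highest group's rate;
    the most-favored group therefore always scores 1.0."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

# Hypothetical audit data: group -> (candidates selected, candidates screened)
data = {
    "group_a": (60, 100),  # 0.60 selection rate
    "group_b": (30, 100),  # 0.30 selection rate
}

print(impact_ratios(data))
```

Here group_b's impact ratio is 0.30 / 0.60 = 0.50, well below the "four-fifths" (0.8) benchmark that U.S. regulators have long used as a rough red flag for disparate impact.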
Artificial intelligence, healthcare, and questions of legal liability
Recent studies suggest that artificial intelligence can help reshape healthcare, particularly in identifying at-risk patients before adverse events occur. Mayo Clinic researchers have found that AI could help spot patients at risk of stroke or cognitive decline. Another Mayo Clinic study focused on using AI to identify complications in pregnant patients. Hal Wolf, president and chief executive officer of the Health Information and Management Systems Society (HIMSS), told Chief Healthcare Executive in a recent interview that he sees health systems turning to AI to identify health risks earlier. "The applications for AI will help in predictive modeling of what to use, where to anticipate diagnoses, how do we maximize the resources in communities," Wolf said.
- Research Report > New Finding (0.35)
- Research Report > Experimental Study (0.35)
- Health & Medicine > Health Care Providers & Services (1.00)
- Health & Medicine > Consumer Health (1.00)
- Health & Medicine > Therapeutic Area > Neurology (0.55)
Robot law: Public policy, legal liability, and the new world of autonomous systems
Algorithmic disgorgement might sound like a phrase from a science-fiction horror film. In fact, it's a new tool for regulators to address the consequences of autonomous systems: ordering companies to remove or destroy algorithms and models in their products that were built on data obtained unfairly or deceptively. This is one of the topics and papers to be presented and discussed at We Robot, an annual conference where scholars and technologists discuss legal and policy questions relating to robots and artificial intelligence. We Robot takes place next week, Sept. 14-16, at the University of Washington in Seattle, with a virtual option as well. It's also an example of how the legal and regulatory landscape for robots, AI, and autonomous systems has changed in the decade since the conference was first held at the University of Miami in 2012. "We've come very far," said Ryan Calo, one of the conference's organizers and a University of Washington law professor who specializes in areas including privacy, artificial intelligence, and robots.
Personhood of autonomous systems: Perceived autonomy in computer science
This is the third article in our series on the personhood of autonomous systems. In the second article, we discussed Kant's concept of autonomy. Here, we will attempt to understand how autonomy is perceived in the computer science domain. You will often see individuals conflate autonomy with automation; however, the two are distinct mechanisms, and each can operate without human interference.
Research study on the legal liability of autonomous robotics
I found really interesting a 2020 study titled "Legal Liability for Autonomous Robotics" by Dr. Safaa Fatouh Gomaa, a member of the Faculty of Law at Mansoura University in Egypt. The study examines legal issues of liability for artificial intelligence products, focusing on the production of autonomous robotics. According to Gomaa, under the European resolutions of 2017 and 2018, the liability rules cover cases where the cause of the robot's actions or missteps can be attributed to a specific human agent, such as the manufacturer, the machinist, the holder, or the manager, and where this agent could have foreseen and circumvented the robot's dangerous conduct. He also adds that since digital technologies are constantly evolving due to patches, updates, and software extensions that influence the behaviour of every mechanism in the system, it is crucial to identify responsibilities among the different actors in the AI supply chain. Given the complexity of the topic, the researcher divides the paper into three sections: section 1 lays out the historical, international, and legal framework for robots; section 2 identifies the legal responsibility for autonomous industrial robotics; and section 3 presents his conclusions. Robot concepts began as legends.
- Africa > Middle East > Egypt (0.05)
- North America > United States > Pennsylvania (0.04)
- Europe (0.04)
- (2 more...)
To Spur Growth in AI, We Need a New Approach to Legal Liability
Artificial intelligence (AI) is sweeping through industries ranging from cybersecurity to environmental protection -- and the Covid-19 pandemic has only accelerated this trend. AI may improve the lives of millions, but it will also inevitably cause accidents that injure people or parties -- indeed, it already has, through incidents like autonomous vehicle crashes. The liability systems of the United States and other countries, however, are outdated and unable to manage these risks, and that gap can itself impede AI innovation and adoption. Reforming the liability system is therefore crucial, and doing so will help speed both.
- Law (1.00)
- Health & Medicine > Therapeutic Area (0.72)
- Banking & Finance > Insurance (0.53)
- Health & Medicine > Pharmaceuticals & Biotechnology (0.48)
These Ex-Journalists Are Using AI to Catch Online Defamation
Like many stories about people trying to help fix the internet, this one begins in the aftermath of 2016. From his home in Ireland, Conor Brady had watched the Brexit vote and the election of Donald Trump with disbelief. In his view, the prominence of false stories during each election--whether about Muslim immigrants or Hillary Clinton's health--was the direct consequence of a hollowed-out news industry without the resources to check the spread of disinformation. At the time, Conor's son, Neil--also a former journalist--was working as a digital policy analyst at the Institute of International and European Affairs, researching neural networks and machine learning. The two got to thinking.
- North America > United States (1.00)
- Europe (0.94)
- Media > News (1.00)
- Government > Regional Government > North America Government > United States Government (0.91)
The AI Liability Puzzle and A Fund-Based Work-Around
Erdelyi, Olivia J. (University of Canterbury) | Erdelyi, Gabor
Confidence in the regulatory environment is crucial to enable responsible AI innovation and foster the social acceptance of these powerful new technologies. One notable source of uncertainty, however, is that the existing legal liability system is unable to assign responsibility where potentially harmful conduct and/or the harm itself is unforeseeable, yet some instantiations of AI and/or the harms they may trigger are not foreseeable in the legal sense. The unpredictability of how courts would handle such cases makes the risks involved in the investment and use of AI difficult to calculate with confidence, creating an environment that is not conducive to innovation and may deprive society of some benefits AI could provide. To tackle this problem, we propose to draw insights from financial regulatory best practices and establish a system of AI guarantee schemes. We envisage the system to form part of the broader market-structuring regulatory frameworks, with the primary function to provide a readily available, clear, and transparent funding mechanism to compensate claims that are either extremely hard or impossible to realize via conventional litigation. We propose it to be at least partially industry-funded. Funding arrangements should depend on whether it would pursue other potential policy goals aimed more broadly at controlling the trajectory of AI innovation to increase economic and social welfare worldwide. Because of the global relevance of the issue, rather than focusing on any particular legal system, we trace relevant developments across multiple jurisdictions and engage in a high-level, comparative conceptual debate around the suitability of the foreseeability concept to limit legal liability.
The paper also refrains from confronting the intricacies of the case law of specific jurisdictions for now and—recognizing the importance of this task—leaves this to further research in support of the legal system’s incremental adaptation to the novel challenges of present and future AI technologies. This article appears in the special track on AI and Society.
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.14)
- Oceania > Australia (0.14)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- (5 more...)
- Law > Statutes (1.00)
- Law > International Law (1.00)
- Law > Criminal Law (1.00)
- (6 more...)
Illegal Pricing Algorithms
On June 6, 2015, the U.S. Department of Justice brought the first-ever online-marketplace prosecution against a price-fixing cartel. One of the special features of the case was that prices were set by algorithms. David Topkins and his competitors designed and shared dynamic pricing algorithms that were programmed to act in conformity with their agreement to set coordinated prices for posters sold online. They were found to have engaged in an illegal cartel. Following the case, the Assistant Attorney General stated that "[w]e will not tolerate anticompetitive conduct, [even if] it occurs...over the Internet using complex pricing algorithms."
- North America > United States (1.00)
- Europe (0.29)
- Asia > Middle East > Israel > Haifa District > Haifa (0.05)
- Law (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
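The Topkins scheme worked because the conspirators' shared algorithms were programmed not to undercut one another. A toy sketch of the difference between a competitive price-matching rule and a coordinated one (all names and numbers are hypothetical; this is not the defendants' actual code):

```python
# Hypothetical contrast between a competitive dynamic-pricing rule and the
# kind of coordinated rule at issue in a price-fixing case. Illustrative only.

def competitive_price(rival_prices, cost, undercut=0.01):
    """Genuinely competitive rule: slightly undercut the cheapest rival,
    but never sell below cost."""
    return max(cost, min(rival_prices) - undercut)

def colluding_price(rival_prices, cost, agreed_floor):
    """Cartel rule: track rivals as usual, but never drop below the
    price floor the conspirators agreed on."""
    return max(agreed_floor, competitive_price(rival_prices, cost))

rivals = [10.00, 9.50, 9.75]
print(competitive_price(rivals, cost=5.00))                    # undercuts the market
print(colluding_price(rivals, cost=5.00, agreed_floor=12.00))  # holds the agreed price
```

When every seller runs the colluding rule, no one ever undercuts, so prices stay pinned at the coordinated level regardless of competition -- which is why prosecutors treated the shared algorithm as an implementation of the cartel agreement rather than ordinary dynamic pricing.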